11 research outputs found

    Towards Energy Efficiency in Heterogeneous Processors: Findings on Virtual Screening Methods

    Get PDF
    The integration of the latest breakthroughs in computational modeling and high performance computing (HPC) has driven advances in fields such as healthcare and drug discovery, among others. By bringing these developments together, scientists are creating exciting new personalized therapeutic strategies for living longer that were unimaginable not long ago. At the same time, HPC is undergoing its biggest revolution of the last decade: several graphics processing unit (GPU) architectures have established their niche in the HPC arena, but at the expense of excessive power consumption and heat. Heterogeneity offers a solution to this problem. In this paper, we analyze power consumption on heterogeneous systems, benchmarking a bioinformatics kernel within the framework of virtual screening methods. Core counts and frequencies are tuned to further improve performance or energy efficiency on those architectures. Our experimental results show that targeted low-cost systems are the lowest-power platforms, although the most energy-efficient platform, and the best suited for performance improvement, is Nvidia's Kepler GK110 GPU programmed with the Compute Unified Device Architecture (CUDA). Finally, the Open Computing Language (OpenCL) version of the virtual screening code shows a notable performance penalty compared with its CUDA counterpart.
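    As a rough illustration of the distinction this abstract draws between low power draw and energy efficiency, the sketch below compares energy to solution across platforms; all platform names and figures are invented placeholders, not measurements from the paper.
```python
# Illustrative sketch (not the paper's data): a platform can draw the least
# power yet be far less energy efficient than a GPU that finishes the same
# job much sooner. All numbers below are hypothetical placeholders.

measurements = {
    # platform: (runtime_seconds, avg_power_watts) for one fixed workload
    "low-cost SoC":  (1800.0,  10.0),
    "multicore CPU": ( 600.0,  95.0),
    "Kepler GK110":  (  60.0, 225.0),
}

for name, (runtime_s, power_w) in measurements.items():
    energy_j = runtime_s * power_w      # energy to solution (joules)
    efficiency = 1.0 / energy_j         # workloads completed per joule
    print(f"{name:14s} power={power_w:6.1f} W  "
          f"energy={energy_j/1e3:5.1f} kJ  work/kJ={1e3*efficiency:.4f}")
```
    Run on these placeholder figures, the SoC draws the least power while the GPU delivers the most work per joule, which is the pattern the abstract reports.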

    A Performance/Cost Evaluation for a GPU-Based Drug Discovery Application on Volunteer Computing

    Get PDF
    Bioinformatics is an interdisciplinary research field that develops tools for the analysis of large biological databases, and thus the use of high performance computing (HPC) platforms is mandatory for the generation of useful biological knowledge. The latest generation of graphics processing units (GPUs) has democratized the use of HPC, as they push desktop computers to cluster-level performance. Many applications within this field have been developed to leverage these powerful and low-cost architectures. However, these applications still need to scale to larger GPU-based systems to enable remarkable advances in the fields of healthcare, drug discovery, genome research, etc. The inclusion of GPUs in HPC systems exacerbates power and temperature issues, increasing the total cost of ownership (TCO). This paper explores the benefits of volunteer computing for scaling bioinformatics applications, as an alternative to owning large GPU-based local infrastructures. As a benchmark, we use a GPU-based drug discovery application called BINDSURF, whose computational requirements go beyond those of a single desktop machine. Volunteer computing is presented as a cheap and valid HPC option for those bioinformatics applications that need to process huge amounts of data and for which response time is not a critical factor.
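    A back-of-the-envelope model of the trade-off described above: volunteer computing exchanges response time for near-zero infrastructure cost. Every parameter here (pool size, availability, redundancy, work-unit cost) is an illustrative assumption, not a value from the paper.
```python
# Hedged sketch: estimate completion time of an embarrassingly parallel
# screening run on a volunteer pool. Volunteers are online only part of the
# time, and each work unit is replicated for result validation, as volunteer
# platforms commonly do. All figures are assumptions for illustration.

def volunteer_makespan(n_work_units, n_volunteers, unit_hours,
                       availability=0.3, redundancy=2):
    """Rough completion time in hours for a pool of volunteer hosts."""
    effective_hosts = n_volunteers * availability
    total_unit_hours = n_work_units * unit_hours * redundancy
    return total_unit_hours / effective_hosts

# e.g. a ligand database split into 100k independent work units
hours = volunteer_makespan(n_work_units=100_000, n_volunteers=5_000,
                           unit_hours=0.5)
print(f"~{hours:.0f} h on the volunteer pool, at near-zero hardware cost")
```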

    A Performance/Cost Model for a CUDA Drug Discovery Application on Physical and Public Cloud Infrastructures

    Get PDF
    Virtual Screening (VS) methods can considerably aid drug discovery research by predicting how ligands interact with drug targets. BINDSURF is an efficient and fast blind VS methodology for the determination of protein binding sites, depending on the ligand, using the massively parallel architecture of graphics processing units (GPUs) for fast, unbiased prescreening of large ligand databases. In this contribution, we provide a performance/cost model for the execution of this application on both local systems and public cloud infrastructures. With our model, it is possible to determine the best infrastructure to use, in terms of execution time and cost, for any given problem to be solved by BINDSURF. The conclusions obtained from our study can be extrapolated to other GPU-based VS methodologies.
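    A minimal sketch of the kind of performance/cost model the abstract describes: given a workload size, estimate time and money on an owned local GPU system versus a public cloud instance, then pick per criterion. Throughputs, prices, and the capex amortization scheme are illustrative assumptions, not figures from the study.
```python
# Hedged sketch of a performance/cost comparison for a GPU VS workload.
# All throughputs and prices below are hypothetical placeholders.

def local_run(n_ligands, ligands_per_hour=50_000,
              capex_eur=5_000.0, lifetime_h=3 * 365 * 24,
              power_kw=0.4, eur_per_kwh=0.15):
    """Owned hardware: cost = amortized purchase price + electricity."""
    hours = n_ligands / ligands_per_hour
    cost = hours * (capex_eur / lifetime_h + power_kw * eur_per_kwh)
    return hours, cost

def cloud_run(n_ligands, ligands_per_hour=40_000,
              eur_per_instance_hour=0.9):
    """Public cloud: pay only for the instance hours actually used."""
    hours = n_ligands / ligands_per_hour
    return hours, hours * eur_per_instance_hour

for label, (t, c) in {"local": local_run(10_000_000),
                      "cloud": cloud_run(10_000_000)}.items():
    print(f"{label}: {t:6.1f} h, {c:8.2f} EUR")
```
    Under assumptions like these, the cloud wins for sporadic workloads (no idle hardware to amortize) while owned hardware wins once utilization is high, which is the kind of conclusion such a model lets one draw per problem size.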

    A922 Sequential measurement of 1 hour creatinine clearance (1-CRCL) in critically ill patients at risk of acute kidney injury (AKI)

    Get PDF
    Meeting abstract

    Evaluación de plataformas de alto rendimiento para el descubrimiento de fármacos

    No full text
    In the first decade of the 21st century, Moore's Law, which had guided processor design for the previous fifty years, was called into question by the scientific community. This was mainly due to the physical limitations of silicon, which forced a change of direction in processor design, with parallelism as its guiding principle. This transition has made (massively) parallel programming the only way to extract maximum performance from new consumer platforms, which is essential for addressing today's scientific challenges. Unfortunately, these challenges pose problems whose computational needs are beyond the reach of a single machine. Simulations such as those discussed in this PhD Thesis need to scale to large data centers, whose costs are affordable only for large institutions and governments. However, the current socio-economic situation demands an efficient use of resources. Tools such as cloud computing and volunteer computing offer an alternative for exploiting computational resources in a flexible, fast, economical, and environmentally friendly way. In this dissertation, we evaluate the computing landscape described above, using as a case study a problem of high social impact: virtual screening, a computational tool extensively used for drug discovery. The study covers all processing levels, starting with an exhaustive analysis of the different commercially available alternatives at the chip level, continuing with their evaluation in a cluster environment, and finally scaling to cloud computing and volunteer computing. The study concludes that GPUs are at the leading edge of the development of scientific applications with massively parallel computing patterns and high computational demands, such as virtual screening; given the numbers shown in this thesis and the results these platforms have achieved in recent years, their use should be considered in most scientific fields that demand large computing capacity. Migrating to these platforms may require the problem to be redesigned or even rethought from scratch, but this is part of computational thinking, which is now essential for developing scientific applications in the current state of high performance computing. For larger-scale executions, alternatives to traditional computing centers need to be evaluated, among them cloud computing and volunteer computing. Cloud computing can be a very interesting option if the computation is performed only intermittently, since idle local resources would not justify the economic investment. On the other hand, a volunteer computing platform is very attractive whenever the application being parallelized can be ported to such a platform, as it offers high performance computing at a cost close to zero. Finally, we would like to stress that this PhD Thesis has contributed to the development of a virtual screening application, and that its use should help find new drug candidates efficiently in terms of performance, energy, and economic cost.

    Accelerating Fibre Orientation Estimation from Diffusion Weighted Magnetic Resonance Imaging Using GPUs

    No full text
    With the performance of central processing units (CPUs) having effectively reached a limit, parallel processing offers an alternative for applications with high computational demands. Modern graphics processing units (GPUs) are massively parallel processors that can simultaneously execute thousands of lightweight threads. In this study, we propose and implement a parallel GPU-based design of a popular method used for the analysis of brain magnetic resonance imaging (MRI). More specifically, we are concerned with a model-based approach for extracting tissue structural information from diffusion-weighted (DW) MRI data. DW-MRI offers, through tractography approaches, the only way to study brain structural connectivity non-invasively and in vivo. We parallelise the Bayesian inference framework for the ball & stick model, as implemented in the tractography toolbox of the popular FSL software package (University of Oxford). For our implementation, we utilise the Compute Unified Device Architecture (CUDA) programming model. We show that the parameter estimation, performed through Markov Chain Monte Carlo (MCMC), is accelerated by at least two orders of magnitude when comparing a single GPU with the respective sequential single-core CPU version. We also illustrate similar speed-up factors (up to 120x) when comparing a multi-GPU with a multi-CPU implementation.
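    To illustrate the parallelization strategy this abstract relies on, the toy sketch below runs one lockstep Metropolis chain per voxel; voxels are independent, which is exactly the property that maps well onto thousands of GPU threads. A toy Gaussian likelihood stands in for the real ball & stick model, so this is a sketch of the approach, not the FSL/CUDA implementation.
```python
import numpy as np

# Toy voxel-parallel MCMC: every voxel carries its own chain, and all chains
# advance together with vectorized operations (the CPU analogue of one GPU
# thread per voxel). The Gaussian likelihood is a stand-in, not the real
# ball & stick model.

rng = np.random.default_rng(0)
n_voxels, n_iters = 10_000, 1_000
data = rng.normal(loc=2.0, scale=1.0, size=n_voxels)  # one datum per voxel

theta = np.zeros(n_voxels)              # current parameter, per voxel
logp = -0.5 * (data - theta) ** 2       # toy log-likelihood, flat prior

for _ in range(n_iters):
    prop = theta + 0.5 * rng.standard_normal(n_voxels)  # propose in parallel
    logp_prop = -0.5 * (data - prop) ** 2
    accept = np.log(rng.random(n_voxels)) < (logp_prop - logp)
    theta = np.where(accept, prop, theta)               # vectorized accept
    logp = np.where(accept, logp_prop, logp)

print("posterior mean across voxels:", theta.mean())   # ~2.0 for this toy
```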

    Subcutaneous anti-COVID-19 hyperimmune immunoglobulin for prevention of disease in asymptomatic individuals with SARS-CoV-2 infection: a double-blind, placebo-controlled, randomised clinical trial

    No full text
    Background: Anti-COVID-19 hyperimmune immunoglobulin (hIG) can provide standardized and controlled antibody content. Data from controlled clinical trials using hIG for the prevention or treatment of COVID-19 in outpatients have not been reported. We assessed the safety and efficacy of subcutaneous anti-COVID-19 hyperimmune immunoglobulin 20% (C19-IG20%), compared to placebo, in preventing the development of symptomatic COVID-19 in asymptomatic individuals with SARS-CoV-2 infection. Methods: We did a multicentre, randomised, double-blind, placebo-controlled trial in asymptomatic unvaccinated adults (≥18 years of age) with SARS-CoV-2 infection confirmed within the previous 5 days, enrolled between April 28 and December 27, 2021. Participants were randomly assigned (1:1:1) to receive a blinded subcutaneous infusion of 10 mL with 1 g or 2 g of C19-IG20%, or an equivalent volume of saline as placebo. The primary endpoint was the proportion of participants who remained asymptomatic through day 14 after infusion. Secondary endpoints included the proportion of individuals who required oxygen supplementation, any medically attended visit, hospitalisation, or ICU admission, as well as viral load reduction and viral clearance in nasopharyngeal swabs. Safety was assessed as the proportion of patients with adverse events. The trial was terminated early due to a lack of potential benefit in the target population in a planned interim analysis conducted in December 2021. ClinicalTrials.gov registry: NCT04847141. Findings: 461 individuals (mean age 39.6 years [SD 12.8]) were randomised and received the intervention within a mean of 3.1 (SD 1.27) days from a positive SARS-CoV-2 test. In the prespecified modified intention-to-treat analysis, which included only participants who received a subcutaneous infusion, the primary outcome occurred in 59.9% (91/152) of participants receiving 1 g of C19-IG20%, 64.7% (99/153) receiving 2 g, and 63.5% (99/156) receiving placebo (difference in proportions: 1 g C19-IG20% vs. placebo, −3.6%, 95% CI −14.6% to 7.3%, p = 0.53; 2 g C19-IG20% vs. placebo, 1.1%, 95% CI −9.6% to 11.9%, p = 0.85). None of the secondary clinical or virological efficacy endpoints differed significantly between study groups. The adverse event rate was similar between groups, and no severe or life-threatening adverse events related to infusion of the investigational product were reported. Interpretation: Our findings suggest that administration of subcutaneous human hyperimmune immunoglobulin C19-IG20% to asymptomatic individuals with SARS-CoV-2 infection was safe but did not prevent the development of symptomatic COVID-19. Funding: Grifols.
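    The reported primary comparison can be reproduced approximately from the quoted counts with a standard normal-approximation interval for a difference in proportions; the trial's exact method may differ slightly, so small discrepancies in the interval bounds are expected.
```python
from math import sqrt

# Recompute the 1 g C19-IG20% vs. placebo comparison from the counts quoted
# in the abstract, using a normal-approximation (Wald) 95% CI.

x1, n1 = 91, 152   # remained asymptomatic, 1 g C19-IG20%
x0, n0 = 99, 156   # remained asymptomatic, placebo

p1, p0 = x1 / n1, x0 / n0
diff = p1 - p0                                    # ~ -3.6%, as reported
se = sqrt(p1 * (1 - p1) / n1 + p0 * (1 - p0) / n0)
lo, hi = diff - 1.96 * se, diff + 1.96 * se       # ~(-14.4%, 7.3%);
print(f"diff={diff:+.1%}, 95% CI ({lo:+.1%}, {hi:+.1%})")  # paper: -14.6%, 7.3%
```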